
    Finding effective support-tree preconditioners

    In 1995, Gremban, Miller, and Zagha introduced support-tree preconditioners and a parallel algorithm called support-tree conjugate gradient (STCG) for solving linear systems of the form Ax = b, where A is an n × n Laplacian matrix. A Laplacian is a symmetric matrix in which the off-diagonal entries are non-positive and the row and column sums are zero. A Laplacian A with 2m non-zeros can be interpreted as an undirected positively-weighted graph G with n vertices and m edges, where there is an edge between two nodes i and j with weight c((i, j)) = −A_{i,j} = −A_{j,i} if A_{i,j} = A_{j,i} < 0. Gremban et al. showed experimentally that STCG performs well on several classes of graphs commonly used in scientific computations. In his thesis, Gremban also proved upper bounds on the number of iterations required for STCG to converge for certain classes of graphs. In this paper, we present an algorithm for finding a preconditioner for an arbitrary graph G = (V, E) with n nodes, m edges, and a weight function c > 0 on the edges, where w.l.o.g. min_{e∈E} c(e) = 1. Equipped with this preconditioner, STCG requires O(log^4 n · √(Δ/α)) iterations, where α = min_{U⊂V, |U|≀|V|/2} c(U, V∖U)/|U| is the minimum edge expansion of the graph, and Δ = max_{v∈V} c(v) is the maximum incident weight on any vertex. Each iteration requires O(m) work and can be implemented in O(log n) steps in parallel, using only O(m) space. Our results generalize to matrices that are symmetric and diagonally dominant (SDD).
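
    As an illustration of the graph view described above, the following minimal Python sketch (our own, not from the paper) builds the Laplacian of a weighted graph and computes the two quantities in the iteration bound: the maximum incident weight Δ, read off the diagonal, and the edge expansion α, here by brute-force cut enumeration, which is feasible only for tiny graphs.

```python
# Illustrative sketch only: the Laplacian <-> weighted-graph correspondence,
# plus Delta and alpha. Edge expansion is computed by exhaustive cut
# enumeration here; the paper's algorithm never enumerates cuts.
from itertools import combinations

def laplacian(n, edges):
    """Build the n x n Laplacian of an undirected weighted graph.

    edges: iterable of (i, j, c) with weight c > 0. Off-diagonals hold
    -c(i, j); each diagonal entry is the total incident weight, so every
    row and column sums to zero.
    """
    A = [[0.0] * n for _ in range(n)]
    for i, j, c in edges:
        A[i][j] -= c
        A[j][i] -= c
        A[i][i] += c
        A[j][j] += c
    return A

def max_incident_weight(A):
    # Delta = max_v c(v): the diagonal stores each vertex's incident weight.
    return max(A[v][v] for v in range(len(A)))

def edge_expansion(n, edges):
    # alpha = min over cuts (U, V \ U) with |U| <= n/2 of c(U, V\U) / |U|.
    best = float("inf")
    for k in range(1, n // 2 + 1):
        for U in map(set, combinations(range(n), k)):
            cut = sum(c for i, j, c in edges if (i in U) != (j in U))
            best = min(best, cut / len(U))
    return best

edges = [(0, 1, 1.0), (1, 2, 2.0), (2, 0, 1.0), (2, 3, 1.0)]
A = laplacian(4, edges)
print(max_incident_weight(A), edge_expansion(4, edges))  # -> 4.0 1.0
```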

    Harvey: A Greybox Fuzzer for Smart Contracts

    We present Harvey, an industrial greybox fuzzer for smart contracts, which are programs managing accounts on a blockchain. Greybox fuzzing is a lightweight test-generation approach that effectively detects bugs and security vulnerabilities. However, greybox fuzzers randomly mutate program inputs to exercise new paths; this makes it challenging to cover code that is guarded by narrow checks, which are satisfied by no more than a few input values. Moreover, most real-world smart contracts transition through many different states during their lifetime, e.g., for every bid in an auction. To explore these states and thereby detect deep vulnerabilities, a greybox fuzzer would need to generate sequences of contract transactions, e.g., by creating bids from multiple users, while at the same time keeping the search space and test suite tractable. In this experience paper, we explain how Harvey alleviates both challenges with two key fuzzing techniques and distill the main lessons learned. First, Harvey extends standard greybox fuzzing with a method for predicting new inputs that are more likely to cover new paths or reveal vulnerabilities in smart contracts. Second, it fuzzes transaction sequences in a targeted and demand-driven way. We have evaluated our approach on 27 real-world contracts. Our experiments show that the underlying techniques significantly increase Harvey's effectiveness in achieving high coverage and detecting vulnerabilities, in most cases orders-of-magnitude faster; they also reveal new insights about contract code.
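
    The following minimal sketch (ours, not Harvey's code) shows the standard coverage-guided greybox loop that the paper extends: mutate a corpus input at random, run it, and keep mutants that reach new coverage. The execute() oracle is a hypothetical stand-in for an instrumented contract run.

```python
# Illustrative greybox fuzzing loop; not Harvey's implementation.
import random

def execute(data: bytes) -> set[int]:
    # Hypothetical instrumented run: returns the set of covered
    # branch/path IDs. A real fuzzer would run the contract under
    # instrumentation and collect real coverage.
    return {data[0] % 8 if data else 0, len(data) % 5}

def mutate(data: bytes) -> bytes:
    # Random bit flip or append -- the blind mutation step that makes
    # narrow checks (e.g. x == 0x1234) hard to satisfy by chance.
    buf = bytearray(data or b"\x00")
    if random.random() < 0.5:
        buf[random.randrange(len(buf))] ^= 1 << random.randrange(8)
    else:
        buf.append(random.randrange(256))
    return bytes(buf)

def greybox_fuzz(seeds, iterations=10_000):
    corpus = list(seeds)
    covered: set[int] = set()
    for s in corpus:
        covered |= execute(s)
    for _ in range(iterations):
        candidate = mutate(random.choice(corpus))
        cov = execute(candidate)
        if not cov <= covered:          # new coverage -> keep the input
            covered |= cov
            corpus.append(candidate)
    return corpus, covered

corpus, covered = greybox_fuzz([b"seed"])
print(len(corpus), sorted(covered))
```

    Harvey's two techniques plug into this loop at the mutation step (predicting promising inputs instead of mutating blindly) and at the input-representation level (fuzzing whole transaction sequences rather than single inputs).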

    Finger Search on Balanced Search Trees

    This thesis introduces the concept of a heterogeneous decomposition of a balanced search tree and applies it to the following problems:
    ‱ How can finger search be implemented without changing the representation of a Red-Black Tree, such as by introducing extra storage in the nodes? (Answer: any degree-balanced search tree can support finger search without modifying its representation, by maintaining an auxiliary data structure of logarithmic size and suitably adapting the search algorithm to make use of it.)
    ‱ Do Multi-Splay Trees, which are known to be O(log log n)-competitive with the optimal binary search tree, have the Dynamic Finger property? (Answer: this is work in progress; we believe the answer is yes.)
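
    As a concrete illustration of the finger-search cost model, here is a minimal sketch (ours, on a sorted array rather than a tree) that locates a key in O(log d) comparisons, where d is the rank distance from the finger. The thesis achieves this bound on unmodified balanced search trees via the auxiliary structure; the array version below only shows the gallop-then-search idea.

```python
# Illustrative finger search on a sorted array: gallop outward from the
# finger in doubling steps, then binary search the bracketed window.
import bisect

def finger_search(keys, finger, target):
    """Find target's insertion index in sorted `keys`, starting near `finger`."""
    n = len(keys)
    step = 1
    if keys[finger] <= target:
        lo, hi = finger, finger + 1
        while hi < n and keys[hi] < target:   # gallop right: 1, 2, 4, ...
            lo, hi = hi, min(n, hi + step)
            step *= 2
    else:
        lo, hi = finger, finger
        while lo > 0 and keys[lo] > target:   # gallop left
            lo, hi = max(0, lo - step), lo
            step *= 2
    # O(log d) binary search inside the window found by galloping.
    return bisect.bisect_left(keys, target, lo, hi + 1 if hi < n else n)

keys = list(range(0, 1000, 2))
print(finger_search(keys, finger=10, target=26))   # -> 13
```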

    Towards Automatic Software Lineage Inference

    Software lineage refers to the evolutionary relationship among a collection of software. The goal of software lineage inference is to recover the lineage given a set of program binaries. Software lineage can provide extremely useful information in many security scenarios, such as malware triage and software vulnerability tracking. In this paper, we systematically study software lineage inference by exploring four fundamental questions not addressed by prior work. First, how do we automatically infer software lineage from program binaries? Second, how do we measure the quality of lineage inference algorithms? Third, how useful are existing approaches to binary similarity analysis for inferring lineage in reality, and how about in an idealized setting? Fourth, what are the limitations that any software lineage inference algorithm must cope with? Towards these goals, we build ILINE, a system for automatic software lineage inference of program binaries, and IEVAL, a system for scientific assessment of lineage quality. We evaluated ILINE on two types of lineage—straight line and directed acyclic graph—with large-scale real-world programs: 1,777 goodware spanning a combined 110 years of development history and 114 malware with known lineage collected by the DARPA Cyber Genome program. We used IEVAL to study seven metrics to assess the diverse properties of lineage. Our results reveal that partial order mismatches and graph arc edit distance often yield the most meaningful comparisons in our experiments. Even without assuming any prior information about the data sets, ILINE proved to be effective in lineage inference—it achieves a mean accuracy of over 84% for goodware and over 72% for malware in our datasets.
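
    For concreteness, a minimal sketch (ours, not ILINE) of the straight-line flavor of the problem: with byte-level n-gram Jaccard similarity standing in for real binary similarity analysis, greedily extend the revision chain from an assumed earliest binary. All names and the similarity measure are simplifying assumptions.

```python
# Illustrative straight-line lineage inference via greedy chaining.
def ngrams(b: bytes, n: int = 4) -> set[bytes]:
    return {b[i:i + n] for i in range(len(b) - n + 1)}

def similarity(x: bytes, y: bytes) -> float:
    # Jaccard overlap of byte n-grams: a crude stand-in for the binary
    # similarity analyses the paper evaluates.
    gx, gy = ngrams(x), ngrams(y)
    return len(gx & gy) / len(gx | gy) if gx | gy else 0.0

def straight_line_lineage(binaries: dict[str, bytes], root: str) -> list[str]:
    """Order revisions by repeatedly appending the unplaced binary most
    similar to the current latest revision (greedy chain)."""
    order, remaining = [root], set(binaries) - {root}
    while remaining:
        nxt = max(remaining,
                  key=lambda k: similarity(binaries[order[-1]], binaries[k]))
        order.append(nxt)
        remaining.remove(nxt)
    return order

revs = {
    "v1": b"init balance transfer",
    "v2": b"init balance transfer withdraw",
    "v3": b"init balance transfer withdraw audit",
}
print(straight_line_lineage(revs, root="v1"))   # -> ['v1', 'v2', 'v3']
```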

    Confronting Hardness Using a Hybrid Approach

    A hybrid algorithm is a collection of heuristics, paired with a polynomial-time selector S that runs on the input to decide which heuristic should be executed to solve the problem. Hybrid algorithms are of particular interest in scenarios where the selector must decide between heuristics that are “good” with respect to different complexity measures. We focus on hybrid algorithms with a “hardness-defying” property: for a problem Π, there is a set of complexity measures {m_i} whereby Π is known or conjectured to be unsolvable for each m_i, but for each heuristic h_i of the hybrid algorithm, one can give a complexity guarantee for h_i on the instances of Π that S selects for h_i that is strictly better than m_i. More concretely, we show that for several NP-hard problems, a given instance can either be solved exactly with substantially improved runtime (e.g. 2^{o(n)}), or be approximated in polynomial time with an approximation ratio exceeding that of the known or conjectured inapproximability of the problem, assuming P ≠ NP.
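
    For concreteness, here is a minimal sketch of the selector pattern (ours, with Vertex Cover as a stand-in problem; the paper's constructions and thresholds differ): S inspects the instance and dispatches it either to an exact exponential-time solver or to a polynomial-time 2-approximation.

```python
# Illustrative hybrid algorithm: selector S + two heuristics.
from itertools import combinations

def exact_vertex_cover(n, edges):
    # Brute force: smallest subset covering every edge. Exponential time,
    # acceptable only on the instances S routes here.
    for k in range(n + 1):
        for C in map(set, combinations(range(n), k)):
            if all(u in C or v in C for u, v in edges):
                return C
    return set(range(n))

def approx_vertex_cover(edges):
    # Classic maximal-matching 2-approximation: polynomial time.
    C, covered = set(), set()
    for u, v in edges:
        if u not in covered and v not in covered:
            C |= {u, v}
            covered |= {u, v}
    return C

def hybrid_vertex_cover(n, edges, exact_cutoff=20):
    # Selector S: a simple size threshold stands in for the paper's
    # polynomial-time selection criterion.
    if n <= exact_cutoff:
        return exact_vertex_cover(n, edges)
    return approx_vertex_cover(edges)

print(hybrid_vertex_cover(4, [(0, 1), (1, 2), (2, 3)]))  # e.g. {0, 2}
```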

    Confronting hardness using a hybrid approach

    Ever since the foundations of NP-completeness were laid down by Cook, Levin, and other pioneers in the 1970s, our community has devised a number of algorithmic strategies to cope with hardness results.